Deep transfer learning has been widely used for knowledge transfer in recent years. The standard approach of pre-training followed by fine-tuning, or linear probing, has proven effective on many downstream tasks. This raises a challenging and ongoing question: how can cross-task transferability be quantified in a way that is consistent with actual transfer results while remaining self-consistent? Existing transferability metrics are estimated for a particular model by pairing source and target tasks, and they must be recalculated against all existing source tasks whenever a new, unseen target task is encountered, which is extremely computationally expensive. In this work, we highlight the properties a transferability metric should satisfy and evaluate existing metrics in light of these characteristics. Building on this, we propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks. Specifically, we use a restart scheme to compute every batch gradient over each weight unit more than once, and then average all of the gradients to obtain the expectation. The transferability between the source and target tasks is then estimated by computing the distance between their normalized principal gradients. Extensive experiments show that the proposed transferability metric is more stable, reliable, and efficient than state-of-the-art methods.
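A minimal sketch of how such a gradient-expectation score could be computed, assuming a PyTorch probe model and tasks given as (dataloader, loss) pairs; the restart scheme, normalization, and plain L2 distance below are illustrative choices, not necessarily the paper's exact PGE formulation.

```python
# Illustrative sketch only; `model`, the dataloaders, and the L2 distance are assumptions.
import torch

def principal_gradient(model, dataloader, loss_fn, n_restarts=3):
    """Average per-weight gradients over repeated passes, then L2-normalize."""
    total, count = None, 0
    for _ in range(n_restarts):            # restart scheme: every batch gradient, more than once
        for x, y in dataloader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            flat = torch.cat([p.grad.detach().flatten()
                              for p in model.parameters() if p.grad is not None])
            total = flat if total is None else total + flat
            count += 1
    expectation = total / count             # gradient expectation over each weight unit
    return expectation / expectation.norm()

def pge_score(model, source_loader, target_loader, loss_fn):
    """Smaller distance between normalized principal gradients -> higher transferability."""
    g_src = principal_gradient(model, source_loader, loss_fn)
    g_tgt = principal_gradient(model, target_loader, loss_fn)
    return -torch.dist(g_src, g_tgt).item()
```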
Most existing methods for coping with noisy labels assume that the class distribution is well balanced, and are therefore ill-equipped to handle the practical scenario in which training samples are imbalanced. To this end, this paper makes an early effort to tackle image classification under both a long-tailed distribution and label noise. In this setting, existing noise-learning methods fail to work, because distinguishing noisy samples from clean samples of the tail classes is challenging. To address this problem, we propose a new learning paradigm that screens noisy samples based on the agreement between inferences on weakly and strongly augmented data, and introduce a leave-out regularization to eliminate the effect of the recognized noisy samples. Furthermore, we incorporate a novel prediction penalty based on an online prior distribution to avoid a bias toward the head classes. Compared with existing long-tailed classification methods, this mechanism is superior in capturing the fitting status of each class in real time. Exhaustive experiments demonstrate that the proposed method outperforms state-of-the-art algorithms in addressing the distribution imbalance problem in long-tailed classification under noisy labels.
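A rough sketch of the screening idea described above, assuming a PyTorch classifier fed paired weak/strong augmentations; the agreement score, threshold, and prior-based logit adjustment are illustrative assumptions rather than the paper's exact losses.

```python
# Illustrative only: the threshold, agreement criterion, and prior adjustment are assumed.
import torch
import torch.nn.functional as F

def noisy_longtail_loss(model, x_weak, x_strong, y, class_prior, agree_thresh=0.5):
    """`class_prior` is a tensor of current (online) class frequencies."""
    logits_w, logits_s = model(x_weak), model(x_strong)
    p_w, p_s = F.softmax(logits_w, dim=1), F.softmax(logits_s, dim=1)
    idx = torch.arange(len(y))
    # Agreement of the weak/strong views with the given label is used to screen noise.
    agreement = 0.5 * (p_w[idx, y] + p_s[idx, y])
    clean = agreement > agree_thresh                 # samples judged clean; noisy ones left out
    # Penalize predictions by the online class prior to counter head-class bias.
    adjusted = logits_w - torch.log(class_prior + 1e-8)
    if clean.any():
        return F.cross_entropy(adjusted[clean], y[clean])
    return logits_w.sum() * 0.0                      # no clean samples in this batch
```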
Random partition models are widely used in Bayesian methods for various clustering tasks, such as mixture models, topic models, and community detection problems. Although the number of clusters induced by random partition models has been studied extensively, another important model property, the balancedness of the induced partitions, has been largely neglected. We define and theoretically study a framework for the balancedness of exchangeable random partition models by analyzing how a model assigns probabilities to partitions with different levels of balance. We show that the "rich-get-richer" characteristic of many existing popular random partition models is an inevitable consequence of two common assumptions: product-form exchangeability and projectivity. We propose a principled way to compare the balancedness of random partition models, which provides a better understanding of which models work better for different applications. We also introduce the "rich-get-poorer" random partition models and illustrate their application to entity resolution tasks.
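For concreteness, one widely used product-form exchangeable partition probability function (EPPF) is the Gibbs-type form below; it is given only to illustrate the "product-form exchangeability" assumption and is not necessarily the exact model class studied in the paper.

```latex
% Gibbs-type EPPF: the probability of a partition of [n] into blocks A_1,...,A_k
% factorizes over block sizes (illustrative of product-form exchangeability).
p(A_1,\dots,A_k) \;=\; V_{n,k} \prod_{j=1}^{k} W_{|A_j|},
% with nonnegative weights V_{n,k} and W_m; projectivity additionally requires these
% probabilities to be consistent under marginalizing from n+1 items down to n items.
```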
Human taste sensation can be characterized with surface electromyography (sEMG). However, pattern recognition models trained on one subject (the source domain) do not generalize to other subjects (the target domain). To improve the generalizability and transferability of taste sensation models developed with sEMG data, two methods were innovated in this study: domain regularized component analysis (DRCA) and conformal prediction with shrunken centroids (CPSC). The effectiveness of the two methods was investigated independently in a setting with only unlabeled data from the target domain, and the same cross-user adaptation pipeline was applied to six subjects. The results show that DRCA improved classification accuracy on all six subjects compared with the baseline model trained only on source-domain data, whereas CPSC did not guarantee an accuracy improvement. Furthermore, the combination of DRCA and CPSC yielded statistically significant improvements (p < 0.05) on the six subjects. The proposed strategy combining DRCA and CPSC demonstrates its effectiveness in addressing cross-user data distribution drift in sEMG-based taste recognition applications, and it shows potential for further cross-user adaptation applications.
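As a generic illustration of the conformal-prediction side of such a pipeline (not the paper's exact CPSC procedure), the sketch below builds a class-conditional inductive conformal classifier whose nonconformity score is the distance to a shrunken class centroid; the shrinkage constant and significance level are assumptions.

```python
# Generic conformal classification with centroid-distance scores; illustrative only.
import numpy as np

def fit_shrunken_centroids(X, y, shrink=0.1):
    """Class centroids pulled slightly toward the overall mean (illustrative shrinkage)."""
    overall = X.mean(axis=0)
    return {c: (1 - shrink) * X[y == c].mean(axis=0) + shrink * overall
            for c in np.unique(y)}

def conformal_prediction_set(X_cal, y_cal, x_new, centroids, alpha=0.05):
    """Class-conditional inductive conformal predictor."""
    pred_set = []
    for c, mu in centroids.items():
        cal_scores = np.linalg.norm(X_cal[y_cal == c] - mu, axis=1)
        s_new = np.linalg.norm(x_new - mu)
        p_value = (np.sum(cal_scores >= s_new) + 1) / (len(cal_scores) + 1)
        if p_value > alpha:
            pred_set.append(c)          # labels not rejected at level alpha
    return pred_set
```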
Structured point process data harvested from various platforms poses new challenges to the machine learning community. By imposing a matrix structure on repeatedly observed marked point processes, we propose a novel mixture model of multi-level marked point processes for identifying potential heterogeneity in the observed data. Specifically, we study a matrix whose entries are modeled by marked log-Gaussian Cox processes and cluster the rows of such a matrix. An efficient semi-parametric Expectation-Solution (ES) algorithm, combined with functional principal component analysis (FPCA) of point processes, is proposed for model estimation. The effectiveness of the proposed framework is demonstrated through simulation studies and real data analyses.
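A heavily simplified sketch of the overall idea, in which each point process is discretized into a smoothed intensity estimate, a grid-based PCA stands in for FPCA, and rows are then clustered; this is only a rough approximation and not the paper's semi-parametric Expectation-Solution algorithm.

```python
# Rough approximation only: smoothed intensities + PCA + Gaussian mixture stand in
# for the paper's FPCA-based semi-parametric estimation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def smoothed_intensity(event_times, grid, bandwidth=0.05):
    """Gaussian-kernel estimate of a point process intensity on a fixed grid."""
    diffs = grid[None, :] - np.asarray(event_times, dtype=float)[:, None]
    return np.exp(-0.5 * (diffs / bandwidth) ** 2).sum(axis=0) / (bandwidth * np.sqrt(2 * np.pi))

def cluster_rows(events_matrix, grid, n_components=5, n_clusters=3):
    """events_matrix[i][j] is a list of event times for row i, column j."""
    rows = [np.concatenate([smoothed_intensity(col, grid) for col in row])
            for row in events_matrix]
    scores = PCA(n_components=n_components).fit_transform(np.array(rows))  # ~ FPCA scores
    return GaussianMixture(n_components=n_clusters).fit_predict(scores)
```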
Graph Neural Networks (GNNs) have shown satisfactory performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
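A minimal sketch of a fairness-regularized distillation objective in the spirit described above, assuming node-level teacher/student logits and a binary sensitive attribute; the demographic-parity proxy used here is a common generic choice, not RELIANT's actual debiasing mechanism.

```python
# Generic fairness-aware KD loss; the parity proxy and weighting are assumptions,
# not the RELIANT framework itself.
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, sensitive,
                 temperature=2.0, lambda_fair=1.0):
    # Standard soft-label distillation term.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                  F.softmax(teacher_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    # Demographic-parity proxy: match mean predicted probabilities across the two groups.
    probs = F.softmax(student_logits, dim=1)
    gap = (probs[sensitive == 0].mean(dim=0) - probs[sensitive == 1].mean(dim=0)).abs().sum()
    return kd + ce + lambda_fair * gap
```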
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method can bring significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
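One simple way to push class feature centers toward an equiangular, maximally separated configuration is sketched below; the pairwise-cosine target of -1/(K-1) comes from the simplex equiangular tight frame geometry, but the batch-wise center estimate and the exact regularizer in the paper may differ.

```python
# Illustrative center regularizer; the batch-wise centers and cosine target are assumptions.
import torch
import torch.nn.functional as F

def etf_center_regularizer(features, labels, num_classes):
    """Encourage normalized class centers to have pairwise cosine close to -1/(K-1)."""
    dim = features.size(1)
    centers = torch.zeros(num_classes, dim, device=features.device)
    counts = torch.zeros(num_classes, device=features.device)
    centers.index_add_(0, labels, features)
    counts.index_add_(0, labels, torch.ones_like(labels, dtype=features.dtype))
    present = counts > 0
    centers = F.normalize(centers[present] / counts[present].unsqueeze(1), dim=1)
    k = centers.size(0)
    if k < 2:
        return features.sum() * 0.0                       # not enough classes in this batch
    cos = centers @ centers.t()
    off_diag = cos[~torch.eye(k, dtype=torch.bool, device=cos.device)]
    target = -1.0 / (num_classes - 1)                     # simplex ETF pairwise cosine
    return ((off_diag - target) ** 2).mean()
```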
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. It is the largest collection of lidar sensor data to date and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting the future motion of "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
The surrogate loss of variational autoencoders (VAEs) poses various challenges to their training, inducing an imbalance between task fitting and representation inference. To avert this, existing strategies for VAEs focus on adjusting the tradeoff by introducing hyperparameters, deriving a tighter bound under mild assumptions, or decomposing the loss components for certain neural settings. VAEs still suffer from uncertain tradeoff learning. We propose a novel evolutionary variational autoencoder (eVAE) building on the variational information bottleneck (VIB) theory and integrative evolutionary neural learning. eVAE integrates a variational genetic algorithm into the VAE with variational evolutionary operators, including variational mutation, crossover, and evolution. Its inner-outer joint training mechanism synergistically and dynamically generates and updates the uncertain tradeoff learning in the evidence lower bound (ELBO) without additional constraints. Apart from learning a lossy compression and representation of data under the VIB assumption, eVAE presents an evolutionary paradigm to tune critical factors of VAEs and deep neural networks, and it addresses the premature convergence and random search problems by integrating evolutionary optimization into deep learning. Experiments show that eVAE addresses the KL-vanishing problem for text generation with low reconstruction loss, generates all disentangled factors with sharp images, and improves image generation quality, respectively. eVAE achieves better reconstruction loss, disentanglement, and generation-inference balance than its competitors.
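A compact sketch of an evolutionary outer loop around VAE training in the spirit described above, where a small population of tradeoff weights is recombined, mutated, and selected by validation ELBO; the specific operators and the choice of evolving only a single beta-like weight are assumptions, not eVAE's actual variational operators.

```python
# Toy evolutionary outer loop over a VAE tradeoff weight; `train_vae` and
# `validation_elbo` are assumed user-supplied, and the operators are illustrative.
import random

def evolve_tradeoff(train_vae, validation_elbo, population_size=6, generations=10):
    population = [random.uniform(0.1, 4.0) for _ in range(population_size)]  # beta-like weights
    for _ in range(generations):
        # Inner step: (partially) train a VAE per candidate, score by validation ELBO.
        scored = sorted(((validation_elbo(train_vae(beta)), beta) for beta in population),
                        reverse=True)
        parents = [beta for _, beta in scored[:population_size // 2]]        # selection
        children = []
        while len(children) < population_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                                            # crossover
            child *= random.uniform(0.8, 1.25)                               # mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda beta: validation_elbo(train_vae(beta)))
```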
Surgical robot automation has attracted increasing research interest over the past decade, given its huge potential to benefit surgeons, nurses, and patients. Recently, the learning paradigm of embodied AI has demonstrated a promising ability to learn good control policies for various complex tasks, where embodied AI simulators play an essential role in supporting relevant researchers. However, existing open-source simulators for surgical robots still do not sufficiently support human interaction through physical input devices, which further limits effective investigation of how human demonstrations would affect policy learning. In this paper, we study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning. Specifically, we establish our platform based on our previously released SurRoL simulator, with several new features co-developed to allow high-quality human interaction via an input device. With these, we further propose to collect human demonstrations and imitate the action patterns to achieve more effective policy learning. We showcase the improvement of our simulation environment with the designed new features and tasks, and validate state-of-the-art reinforcement learning algorithms using the interactive environment. Promising results are obtained, with which we hope to pave the way for future research on surgical embodied intelligence. Our platform is released and will be continuously updated at: https://med-air.github.io/SurRoL/